99 research outputs found

    Blockchain-Enabled Federated Learning: A Reference Architecture Design, Implementation, and Verification

    This paper presents an innovative reference architecture for blockchain-enabled federated learning (BCFL), a state-of-the-art approach that combines the strengths of federated learning and blockchain technology, resulting in a decentralized, collaborative machine learning system that respects data privacy and user-controlled identity. Our architecture strategically employs a decentralized identifier (DID)-based authentication system, allowing participants to authenticate and then securely gain access to the federated learning platform using their self-sovereign DIDs, which are recorded on the blockchain. Ensuring robust security and efficient decentralization through the execution of smart contracts is a key aspect of our approach. Moreover, our BCFL reference architecture provides significant extensibility, accommodating the integration of additional elements as required by specific use cases, rendering it an adaptable solution for a wide range of BCFL applications. The pivotal contribution of this study is the successful implementation and validation of a realistic BCFL reference architecture, a significant milestone in the field. We intend to make the source code publicly accessible shortly, fostering further advancements and adaptations within the community. This research not only bridges a crucial gap in the current literature but also lays a solid foundation for future explorations in the realm of BCFL. Comment: 14 pages, 15 figures, 3 tables

    Low-complexity dynamic resource scheduling for downlink MC-NOMA over fading channels

    In this paper, we investigate dynamic resource scheduling (i.e., joint user, subchannel, and power scheduling) for downlink multi-channel non-orthogonal multiple access (MC-NOMA) systems over time-varying fading channels. Specifically, we address the weighted average sum rate maximization problem with quality-of-service (QoS) constraints. In particular, to facilitate fast resource scheduling, we focus on developing a very low-complexity algorithm. To this end, by leveraging Lagrangian duality and stochastic optimization theory, we first develop an opportunistic MC-NOMA scheduling algorithm whereby the original problem is decomposed into a series of subproblems, one for each time slot. Accordingly, resource scheduling works in an online manner by solving one subproblem per time slot, making it more applicable to practical systems. Then, we further develop a heuristic joint subchannel assignment and power allocation (Joint-SAPA) algorithm with very low computational complexity, called Joint-SAPA-LCC, that solves each subproblem. Finally, through simulation, we show that our Joint-SAPA-LCC algorithm provides performance comparable to that of existing Joint-SAPA algorithms despite requiring much lower computational complexity. We also demonstrate that our opportunistic MC-NOMA scheduling algorithm, in which the Joint-SAPA-LCC algorithm is embedded, works well while satisfying given QoS requirements. Comment: 39 pages, 11 figures
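    The per-slot decomposition described above can be illustrated with a minimal sketch (an assumption-laden toy, not the paper's Joint-SAPA-LCC algorithm): in a simplified single-subchannel model with scalar per-user rates, one dual variable per minimum-average-rate constraint biases a greedy per-slot user choice and is updated with a subgradient step each slot.

```python
def opportunistic_schedule(weights, r_min, rates_per_slot, step=0.05):
    """Per-slot dual-decomposition sketch: each slot, serve the user
    maximising (w_k + lam_k) * r_k(t), where lam_k is the dual variable
    of user k's minimum-average-rate constraint; then update lam_k with
    a subgradient step toward the target r_min[k].

    rates_per_slot: list of per-slot lists of achievable user rates.
    Returns (average served rates, final dual variables)."""
    K = len(weights)
    lam = [0.0] * K        # dual variables, one per QoS constraint
    served = [0.0] * K     # cumulative served rate per user
    T = len(rates_per_slot)
    for rates in rates_per_slot:
        # Greedy choice biased by the duals (the per-slot subproblem).
        k_star = max(range(K), key=lambda k: (weights[k] + lam[k]) * rates[k])
        for k in range(K):
            got = rates[k] if k == k_star else 0.0
            served[k] += got
            # Subgradient update: grow lam_k when user k falls short.
            lam[k] = max(0.0, lam[k] + step * (r_min[k] - got))
    return [s / T for s in served], lam
```

With all minimum-rate targets at zero, the dual variables stay at zero and this reduces to plain opportunistic (max-weight) scheduling.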

    Self-Improving Interference Management Based on Deep Learning With Uncertainty Quantification

    This paper presents a groundbreaking self-improving interference management framework tailored for wireless communications, integrating deep learning with uncertainty quantification to enhance overall system performance. Our approach addresses the computational challenges inherent in traditional optimization-based algorithms by harnessing deep learning models to predict optimal interference management solutions. A significant breakthrough of our framework is its acknowledgment of the limitations inherent in data-driven models, particularly in scenarios not adequately represented by the training dataset. To overcome these challenges, we propose a method for uncertainty quantification, accompanied by a qualifying criterion, to assess the trustworthiness of model predictions. This framework strategically alternates between model-generated solutions and traditional algorithms, guided by a criterion that assesses prediction credibility based on the quantified uncertainties. Experimental results validate the framework's efficacy, demonstrating its superiority over traditional deep learning models, notably in scenarios underrepresented in the training dataset. This work marks a pioneering endeavor in harnessing self-improving deep learning for interference management through the lens of uncertainty quantification.
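    The alternation between model-generated solutions and a traditional algorithm can be sketched as a simple gate (a hypothetical illustration assuming ensemble disagreement as the uncertainty proxy; the paper's actual quantification method and qualifying criterion may differ):

```python
import statistics

def self_improving_solve(x, ensemble, fallback, max_std=0.1):
    """Uncertainty-gated inference sketch: query an ensemble of learned
    predictors; if their disagreement (population std. dev.) exceeds
    max_std, the prediction is deemed untrustworthy and the traditional
    optimizer (fallback) is used instead.

    Returns (solution, source, uncertainty)."""
    preds = [model(x) for model in ensemble]
    uncertainty = statistics.pstdev(preds)
    if uncertainty <= max_std:
        return statistics.mean(preds), "model", uncertainty
    return fallback(x), "fallback", uncertainty
```

The threshold `max_std` plays the role of the qualifying criterion: it trades inference speed (model path) against reliability (fallback path) in underrepresented scenarios.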

    Low-complexity joint user and power scheduling in downlink NOMA over fading channels

    Non-orthogonal multiple access (NOMA) has been considered one of the most promising radio access techniques for next-generation cellular networks. In this paper, we study joint user and power scheduling for downlink NOMA over fading channels. Specifically, we focus on a stochastic optimization problem to maximize the weighted average sum rate while ensuring given minimum average data rates for users. To address this problem, we first develop an opportunistic user and power scheduling algorithm (OUPS) based on duality and stochastic optimization theory. Through OUPS, the stochastic problem is transformed into a series of deterministic problems, one instantaneous weighted sum rate maximization per slot. We additionally develop a heuristic algorithm with very low computational complexity, called the user selection and power allocation algorithm (USPA), for the instantaneous weighted sum rate maximization problem. Via simulation results, we demonstrate that USPA provides near-optimal performance with very low computational complexity, and that OUPS well guarantees the given minimum average data rates. Comment: 7 pages, 5 figures
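    The instantaneous weighted sum rate subproblem solved each slot can be illustrated for a single two-user NOMA pair with successive interference cancellation (SIC). The grid search below is a hypothetical stand-in for the paper's USPA heuristic, and the scalar channel-gain model with unit noise is an assumption for illustration:

```python
import math

def noma_pair_rates(p1, p2, g1, g2, noise=1.0):
    """Two-user downlink NOMA rates with SIC at the strong user
    (g1 >= g2): the strong user cancels the weak user's signal, while
    the weak user treats the strong user's signal as interference."""
    r1 = math.log2(1.0 + p1 * g1 / noise)
    r2 = math.log2(1.0 + p2 * g2 / (p1 * g2 + noise))
    return r1, r2

def best_power_split(p_total, g1, g2, w1=1.0, w2=1.0, grid=100):
    """Grid-search the power split (p1, p_total - p1) maximising the
    instantaneous weighted sum rate w1*r1 + w2*r2.
    Returns (best weighted sum rate, best p1)."""
    best = (-1.0, 0.0)
    for i in range(grid + 1):
        p1 = p_total * i / grid
        r1, r2 = noma_pair_rates(p1, p_total - p1, g1, g2)
        val = w1 * r1 + w2 * r2
        if val > best[0]:
            best = (val, p1)
    return best
```

In a full scheduler this per-pair evaluation would be repeated over candidate user pairs to pick both the pair and its power split, which is where a low-complexity heuristic pays off.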

    Dynamic Joint Scheduling of Anycast Transmission and Modulation in Hybrid Unicast-Multicast SWIPT-Based IoT Sensor Networks

    The separate receiver architecture with a time- or power-splitting mode, widely used for simultaneous wireless information and power transfer (SWIPT), has a major drawback: energy-intensive local oscillators and mixers must be installed in the information decoding (ID) component to downconvert radio frequency (RF) signals to baseband signals, resulting in high energy consumption. As a solution to this challenge, an integrated receiver (IR) architecture has been proposed, and, in turn, various SWIPT modulation schemes compatible with the IR architecture have been developed. However, to the best of our knowledge, no research has been conducted on modulation scheduling in SWIPT-based IoT sensor networks that takes the IR architecture into account. Accordingly, in this paper, we address this research gap by studying the problem of joint scheduling for unicast/multicast, IoT sensor, and modulation (UMSM) in a time-slotted SWIPT-based IoT sensor network system. To this end, we leverage mathematical modeling and optimization techniques, such as Lagrangian duality and stochastic optimization theory, to develop a UMSM scheduling algorithm that maximizes the weighted sum of average unicast service throughput and harvested energy of IoT sensors, while ensuring the minimum average throughput of both multicast and unicast, as well as the minimum average harvested energy of IoT sensors. Finally, we demonstrate through extensive simulations that our UMSM scheduling algorithm achieves superior energy harvesting (EH) and throughput performance while satisfying the specified constraints. Comment: 29 pages, 13 figures (eps)
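    At its simplest, the per-slot UMSM decision can be viewed as picking, among candidate (sensor, modulation) pairs, the one maximising a weighted sum of instantaneous throughput and harvested energy. The sketch below is a hypothetical simplification that omits the dual variables enforcing the minimum-average constraints; all names are illustrative:

```python
def umsm_slot_choice(candidates, w_rate=1.0, w_energy=1.0):
    """Greedy per-slot sketch: each candidate is a tuple
    (sensor_id, modulation, rate, harvested_energy); return the one
    maximising w_rate * rate + w_energy * harvested_energy."""
    return max(candidates, key=lambda c: w_rate * c[2] + w_energy * c[3])
```

In the full stochastic-optimization treatment, the weights would be augmented by dual variables that rise whenever a sensor's average throughput or harvested energy falls below its target, steering later slots toward the constrained sensors.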

    Remote Bio-Sensing: Open Source Benchmark Framework for Fair Evaluation of rPPG

    Remote photoplethysmography (rPPG) is a technology that utilizes the light absorption properties of hemoglobin, captured via camera, to analyze and measure the blood volume pulse (BVP). By analyzing the measured BVP, various physiological signals such as heart rate, stress level, and blood pressure can be derived, enabling applications such as the early prediction of cardiovascular diseases. rPPG is a rapidly evolving field, as it allows the measurement of vital signs using camera-equipped devices without the need for additional devices such as blood pressure monitors or pulse oximeters, and without the assistance of medical experts. Despite extensive efforts and advances in this field, serious challenges remain, including issues related to skin color, camera characteristics, ambient lighting, and other sources of noise, all of which degrade accuracy. We argue that fair and evaluable benchmarking is urgently required to overcome these challenges and make any meaningful progress from both academic and commercial perspectives. In most existing work, models are trained, tested, and validated only on limited datasets. Worse still, some studies lack available code or reproducibility, making it difficult to fairly evaluate and compare performance. Therefore, the purpose of this study is to provide a benchmarking framework to evaluate various rPPG techniques across a wide range of datasets for fair evaluation and comparison, including both conventional non-deep neural network (non-DNN) and deep neural network (DNN) methods. GitHub URL: https://github.com/remotebiosensing/rppg. Comment: 19 pages, 10 figures
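    As a minimal example of the final stage of an rPPG pipeline, heart rate can be estimated from a BVP trace by locating the dominant frequency within the plausible cardiac band. The naive DFT below is an illustrative sketch, not code from the benchmarked framework:

```python
import math

def heart_rate_from_bvp(bvp, fs, lo=0.7, hi=4.0):
    """Estimate heart rate (beats per minute) from a BVP trace sampled
    at fs Hz by finding the dominant DFT bin inside the plausible
    cardiac band [lo, hi] Hz (42-240 bpm by default)."""
    n = len(bvp)
    mean = sum(bvp) / n
    x = [v - mean for v in bvp]          # remove the DC component
    best_f, best_p = 0.0, -1.0
    for k in range(1, n // 2):
        f = k * fs / n                   # frequency of bin k in Hz
        if not (lo <= f <= hi):
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im            # spectral power at bin k
        if p > best_p:
            best_f, best_p = f, p
    return best_f * 60.0
```

A real implementation would use an FFT and windowing, but the band-limited peak search is the essential idea, and its frequency resolution (fs/n) is why longer video windows give steadier heart-rate estimates.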

    Adjuvant Chemotherapy in Microsatellite Instability-High Gastric Cancer

    Purpose: Microsatellite instability (MSI) status may affect the efficacy of adjuvant chemotherapy in gastric cancer. In this study, the clinical characteristics of MSI-high (MSI-H) gastric cancer and the predictive value of MSI-H for adjuvant chemotherapy were evaluated in large cohorts of gastric cancer patients. Materials and Methods: This study consisted of two cohorts. Cohort 1 included gastric cancer patients who underwent curative resection with pathologic stage IB-IIIC. Cohort 2 included patients with MSI-H gastric cancer who underwent curative resection with pathologic stage II/III. MSI was examined using two mononucleotide markers and three dinucleotide markers. Results: Of 359 patients (cohort 1), 41 (11.4%) had MSI-H. MSI-H tumors were more frequently identified in older patients (p < 0.001), histology other than the poorly cohesive, signet ring cell type (p=0.005), intestinal type (p=0.028), lower third tumor location (p=0.005), and absent perineural invasion (p=0.027). MSI-H status showed a trend toward better disease-free survival (DFS) and overall survival (OS) in multivariable analyses (hazard ratio [HR], 0.4; p=0.059 and HR, 0.4; p=0.063, respectively). In the analysis of 162 MSI-H patients (cohort 2), adjuvant chemotherapy showed a significant benefit with respect to longer DFS and OS (p=0.047 and p=0.043, respectively). In multivariable analysis, adjuvant chemotherapy improved DFS (HR, 0.4; p=0.040). Conclusion: MSI-H gastric cancer had distinct clinicopathologic findings. Even in this retrospective cohort of MSI-H gastric cancer, adjuvant chemotherapy showed a survival benefit, in contrast to previous prospective studies; this should be investigated in a further prospective trial.

    Edge-functionalized graphene-like platelets as a co-curing agent and a nanoscale additive to epoxy resin

    A newly developed method for the edge-selective functionalization of "pristine" graphite with 4-aminobenzoic acid was applied for the synthesis of 4-aminobenzoyl-functionalized graphite (AB-graphite) through a "direct" Friedel-Crafts acylation in a polyphosphoric acid (PPA)/phosphorus pentoxide (P2O5) medium. The AB moiety at the edge of the AB-graphite acted as a molecular wedge to exfoliate the AB-graphite into individual graphene and graphene-like platelets upon dispersion in polar solvents. These were used as a co-curing agent and a nanoscale additive to epoxy resin. The physical properties of the resulting epoxy/AB-graphite composites were improved because of the efficient load transfer between the additive and the epoxy matrix through covalent links.

    High blood viscosity in acute ischemic stroke

    Background: Changes in blood viscosity can influence the shear stress at the vessel wall, but there is limited evidence regarding the impact on thrombogenesis and acute stroke. We aimed to investigate the effect of blood viscosity on stroke and the clinical utility of blood viscosity measurements obtained immediately upon hospital arrival. Methods: Patients with suspected stroke visiting the hospital within 24 h of the last known well time were enrolled. Point-of-care testing was used to obtain blood viscosity measurements before intravenous fluid infusion. Blood viscosity was measured as the reactive torque generated at three oscillatory frequencies (1, 5, and 10 rad/sec). Blood viscosity results were compared among patients with ischemic stroke, hemorrhagic stroke, and stroke mimics (diagnosed as conditions other than stroke). Results: Among 112 enrolled patients, blood viscosity measurements were completed within 2.4 ± 1.3 min of vessel puncture. At an oscillatory frequency of 10 rad/sec, blood viscosity differed significantly between the ischemic stroke (24.2 ± 4.9 centipoise, cP) and stroke mimic groups (17.8 ± 6.5 cP, p < 0.001). This finding was consistent at the other oscillatory frequencies (134.2 ± 46.3 vs. 102.4 ± 47.2 cP at 1 rad/sec and 39.2 ± 11.5 vs. 30.4 ± 12.4 cP at 5 rad/sec, both p < 0.001), suggesting that viscosity decreases as the shear rate increases. The area under the receiver operating characteristic curve for differentiating ischemic stroke from stroke mimic was 0.79 (95% confidence interval, 0.69–0.88). Conclusion: Patients with ischemic stroke exhibit increased whole blood viscosity, suggesting that blood viscosity measurements can aid in differentiating ischemic stroke from other diseases.
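    The reported area under the curve has a useful probabilistic reading: it equals the probability that a randomly chosen stroke patient's viscosity exceeds that of a randomly chosen stroke mimic (the Mann-Whitney statistic). A minimal sketch with hypothetical inputs, not the study's data:

```python
def roc_auc(positives, negatives):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs in which the positive case (e.g. an
    ischemic stroke patient's viscosity) scores higher than the
    negative case (a stroke mimic), counting ties as half a win."""
    wins = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in positives for q in negatives
    )
    return wins / (len(positives) * len(negatives))
```

An AUC of 0.79 therefore means that in roughly four of five random stroke/mimic pairs, the stroke patient has the higher viscosity reading.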